SemanticLock: An authentication method for mobile devices using semantically-linked images
We introduce SemanticLock, a single-factor graphical authentication solution
for mobile devices. SemanticLock uses a set of graphical images as password
tokens that construct a semantically memorable story representing the user's
password. Authenticating requires only the familiar, quick action of dragging
the images into their respective positions on the touchscreen, either in a
continuous flow or in discrete movements.
The authentication strength of SemanticLock is based on the large number
of possible semantic constructs derived from the positioning of the image
tokens and the type of images selected. SemanticLock has high resistance to
smudge attacks and equally exhibits a higher level of memorability due to
its graphical paradigm.
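The claim that authentication strength comes from combining token choice with token positioning can be sketched with a back-of-the-envelope count. All numbers below (library size, tokens per password, screen slots) are illustrative assumptions, not figures from the paper:

```python
from math import perm

# Hypothetical parameters (assumptions, not from the paper):
# a user picks an ordered sequence of 4 image tokens from a library
# of 20, and drags each token into one of 6 screen positions.
tokens, chosen, slots = 20, 4, 6

# Ordered token selections, times a slot assignment for every token.
password_space = perm(tokens, chosen) * slots ** chosen
print(password_space)  # → 150698880
```

Even with these modest assumed parameters, the space (~1.5 × 10^8) is far larger than a 4-digit PIN's 10^4, which is the combinatorial intuition behind the abstract's strength claim.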
In a three-week user study with 21 participants comparing SemanticLock
against other authentication systems, we found that SemanticLock
outperformed PIN and matched PATTERN on speed, memorability, user
acceptance, and usability. Furthermore, qualitative tests also showed that
SemanticLock was rated higher in likeability. SemanticLock was also
evaluated while participants walked unencumbered and walked encumbered carrying
"everyday" items, to analyze the effects of such activities on its usage.
Biomove: Biometric user identification from human kinesiological movements for virtual reality systems
© 2020 by the authors. Licensee MDPI, Basel, Switzerland. Virtual reality (VR) has advanced rapidly and is used for many entertainment and business purposes. The need for secure, transparent and non-intrusive identification mechanisms is important to facilitate users' safe participation and secure experience. People are kinesiologically unique, having individual behavioral and movement characteristics, which can be leveraged and used in security-sensitive VR applications to compensate for users' inability to detect potential observational attackers in the physical world. Additionally, such a method of identification using a user's kinesiological data is valuable in common scenarios where multiple users simultaneously participate in a VR environment. In this paper, we present a user study (n = 15) where our participants performed a series of controlled tasks that require physical movements (such as grabbing, rotating and dropping) that could be decomposed into unique kinesiological patterns while we monitored and captured their hand, head and eye gaze data within the VR environment. We present an analysis of the data and show that these data can be used as a biometric discriminant of high confidence using machine learning classification methods such as kNN or SVM, thereby adding a layer of security in terms of identification or dynamically adapting the VR environment to the users' preferences. We also performed white-box penetration testing with 12 attackers, some of whom were physically similar to the participants. We could obtain an average identification confidence value of 0.98 from the actual participants' test data after the initial study and also a trained model classification accuracy of 98.6%. Penetration testing indicated all attackers resulted in confidence values of less than 50% (<50%), although physically similar attackers had higher confidence values. These findings can help the design and development of secure VR systems.
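The identification pipeline the abstract describes (kNN voting over per-user movement features, yielding a label plus a confidence value) can be sketched as follows. The feature vectors here are synthetic Gaussian clusters standing in for real hand/head/gaze features, and the helper `knn_identify` is a hypothetical name, not the paper's implementation:

```python
# Minimal sketch of kNN-based user identification with a confidence score.
# Features are synthetic assumptions, not the paper's actual VR motion data.
import numpy as np
from collections import Counter

rng = np.random.default_rng(0)

# Simulate per-user "movement signatures": 3 users, 20 samples each,
# 4-dimensional feature vectors clustered around user-specific means.
means = np.array([[0.0, 0, 0, 0], [3.0, 3, 3, 3], [-3.0, -3, -3, -3]])
X = np.vstack([m + rng.normal(scale=0.5, size=(20, 4)) for m in means])
y = np.repeat([0, 1, 2], 20)

def knn_identify(query, X, y, k=5):
    """Predict a user ID by majority vote among the k nearest samples;
    the vote fraction serves as a simple confidence value."""
    dists = np.linalg.norm(X - query, axis=1)
    votes = y[np.argsort(dists)[:k]]
    label, count = Counter(votes).most_common(1)[0]
    return label, count / k

# A query close to user 1's cluster is identified with full-vote confidence.
label, conf = knn_identify(np.array([2.8, 3.1, 2.9, 3.2]), X, y)
print(label, conf)
```

In the same spirit, the abstract's penetration-testing result corresponds to attacker queries landing far from every enrolled cluster, so their vote fractions (confidence values) stay low, below the acceptance threshold.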